Bandit Problems and Online Learning

Author

  • Wes Cowan
Abstract

In this section, we consider problems related to online learning. In particular, we are interested in problems where data becomes available sequentially, and decisions must be made or actions taken based on the data available so far. This is in contrast with many problems in optimization and model fitting, where all the data under consideration is available at the start. Further, the outcome of the decisions made or actions taken may influence what data or actions become available in the future. In this way, the online decision process forms a sort of informational control problem: our goal is to make optimal use of the information available at any point in order to inform the current action and optimize future outcomes. We will consider a number of models.
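
To make this decision loop concrete, the sketch below runs a simple epsilon-greedy policy on a stochastic multi-armed bandit: at each step the learner acts on its current reward estimates, observes feedback only for the arm it actually pulled, and updates those estimates before the next decision. This is a minimal illustrative sketch only; epsilon-greedy is one standard baseline strategy, and the arm means, epsilon value, and function names are assumptions, not taken from the text.

import random

# Minimal epsilon-greedy loop for a stochastic multi-armed bandit.
# The arm means, horizon, and epsilon below are illustrative values.
def epsilon_greedy(true_means, horizon=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0

    for t in range(horizon):
        if t < n_arms:
            arm = t                       # pull each arm once to initialize
        elif rng.random() < epsilon:
            arm = rng.randrange(n_arms)   # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit: best estimate

        # Bandit feedback: a reward is observed only for the arm pulled.
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return total_reward, estimates

if __name__ == "__main__":
    reward, est = epsilon_greedy([0.2, 0.5, 0.7])
    print(f"total reward: {reward:.0f}")
    print("estimated means:", [round(e, 3) for e in est])

The incremental mean update avoids storing the full reward history; other index policies (for example, an upper-confidence-bound rule) can be dropped in by replacing the exploitation line.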

Similar resources

Online Learning with Composite Loss Functions

We study a new class of online learning problems where each of the online algorithm’s actions is assigned an adversarial value, and the loss of the algorithm at each step is a known and deterministic function of the values assigned to its recent actions. This class includes problems where the algorithm’s loss is the minimum over the recent adversarial values, the maximum over the recent values,...

Beetle Bandit: Evaluation of a Bayesian -

A novel approach to Bayesian reinforcement learning (RL) named Beetle has recently been presented; this approach nicely balances exploration vs. exploitation while learning is performed online. This has produced interest in experimental results obtained with the Beetle algorithm. This thesis gives an overview of bandit problems and modifies the Beetle algorithm. The new Beetle Bandit algori...

Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning

We present a learning algorithm for undiscounted reinforcement learning. Our interest lies in bounds for the algorithm’s online performance after some finite number of steps. In the spirit of similar methods already successfully applied for the exploration-exploitation tradeoff in multi-armed bandit problems, we use upper confidence bounds to show that our UCRL algorithm achieves logarithmic on...
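
For reference, the upper-confidence-bound idea this abstract alludes to can be stated compactly: the classic UCB1 rule pulls the arm maximizing its empirical mean plus an exploration bonus that shrinks as the arm is sampled. The sketch below shows that index; it is the standard multi-armed-bandit rule, not the UCRL algorithm of the paper itself, and the example numbers are made up.

import math

def ucb1_index(mean_estimate, pulls, t):
    # Classic UCB1 index: empirical mean plus an exploration bonus
    # that grows with total time t and shrinks with the arm's pull count.
    if pulls == 0:
        return float("inf")  # ensures every arm is tried at least once
    return mean_estimate + math.sqrt(2.0 * math.log(t) / pulls)

# Example (made-up numbers): at step t = 100, an arm pulled 10 times
# with empirical mean 0.6 gets index 0.6 + sqrt(2 ln 100 / 10) ≈ 1.56.
print(round(ucb1_index(0.6, 10, 100), 2))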

Portfolio Choices with Orthogonal Bandit Learning

The investigation and development of new methods from diverse perspectives to shed light on portfolio choice problems have never stagnated in financial research. Recently, multi-armed bandits have drawn intensive attention in various machine learning applications in online settings. The tradeoff between exploration and exploitation to maximize rewards in bandit algorithms naturally establishes a...

The Knowledge Gradient Algorithm for a General Class of Online Learning Problems

We derive a one-period look-ahead policy for finite- and infinite-horizon online optimal learning problems with Gaussian rewards. Our approach is able to handle the case where our prior beliefs about the rewards are correlated, which is not handled by traditional multi-armed bandit methods. Experiments show that our KG policy performs competitively against the best known approximation to the opti...

Q-Learning for Bandit Problems

Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDPs) with potentially very large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty years ago by Gittins [Gittins & Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping...

Publication date: 2016